End-to-end Learning of LDA by Mirror-Descent Back Propagation over a Deep Architecture
Authors
Abstract
We develop a fully discriminative learning approach for the supervised Latent Dirichlet Allocation (LDA) model, which maximizes the posterior probability of the prediction variable given the input document. Unlike traditional variational-learning or Gibbs-sampling approaches, the proposed method applies (i) the mirror descent algorithm for exact maximum a posteriori (MAP) inference and (ii) back propagation with stochastic gradient descent for model parameter estimation, leading to scalable, end-to-end discriminative learning of the model. As a byproduct, we also apply this technique to develop a new learning method for the traditional unsupervised LDA model. Experimental results on two real-world regression and classification tasks show that the proposed methods significantly outperform previous supervised/unsupervised LDA learning methods.
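For concreteness, the per-document inference that the paper unrolls into layers of a deep architecture can be sketched as exponentiated-gradient (mirror descent) updates on the probability simplex. The NumPy sketch below is illustrative only: the function name, step size, and iteration count are our assumptions, and the full method additionally back-propagates through these unrolled updates to learn the topic matrix.

```python
import numpy as np

def mirror_descent_inference(x, Phi, alpha, n_steps=50, step_size=0.01):
    """MAP inference of the topic mixture theta for one document (sketch).

    x     : (V,) word-count vector of the document
    Phi   : (V, K) topic-word probability matrix (columns sum to 1)
    alpha : (K,) Dirichlet prior (alpha > 1 keeps the MAP in the interior)

    Uses mirror descent with the entropy mirror map (exponentiated
    gradient), so every iterate stays on the probability simplex.
    """
    V, K = Phi.shape
    theta = np.full(K, 1.0 / K)          # uniform start on the simplex
    for _ in range(n_steps):
        p = Phi @ theta                  # per-word likelihoods, shape (V,)
        # gradient of the negative log-posterior with respect to theta
        grad = -(Phi.T @ (x / np.maximum(p, 1e-12))) \
               - (alpha - 1.0) / np.maximum(theta, 1e-12)
        theta = theta * np.exp(-step_size * grad)   # multiplicative update
        theta /= theta.sum()             # re-normalize onto the simplex
    return theta
```

In the paper's end-to-end setting, each such update becomes one layer of the network, so gradients with respect to Phi flow back through all inference steps; the fixed step size here is a simplification.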
Similar resources
Deep Linear Discriminant Analysis
We introduce Deep Linear Discriminant Analysis (DeepLDA), which learns linearly separable latent representations in an end-to-end fashion. Classic LDA extracts features that preserve class separability and is used for dimensionality reduction in many classification problems. The central idea of this paper is to put LDA on top of a deep neural network. This can be seen as a non-linear extension...
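As a rough illustration of the idea (not the authors' implementation), one can evaluate an LDA eigenvalue objective on the features a network produces; DeepLDA-style training then maximizes such eigenvalues so that all discriminant directions separate the classes, not just the best one. The function name, the eps regularizer, and the defaults below are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def lda_eigenvalue_loss(H, y, n_classes, eps=1e-3, m=None):
    """Illustrative LDA objective on network features H of shape (N, D).

    Builds within-class (Sw) and between-class (Sb) scatter matrices and
    returns the negated mean of the m smallest of the C-1 leading
    generalized eigenvalues of  Sb v = lambda (Sw + eps I) v.
    Assumes every class appears in the batch.
    """
    N, D = H.shape
    mean_all = H.mean(axis=0)
    Sw = np.zeros((D, D))
    Sb = np.zeros((D, D))
    for c in range(n_classes):
        Hc = H[y == c]
        mc = Hc.mean(axis=0)
        Sw += (Hc - mc).T @ (Hc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += len(Hc) * (diff @ diff.T)
    Sw /= N
    Sb /= N
    evals = eigh(Sb, Sw + eps * np.eye(D), eigvals_only=True)  # ascending
    top = evals[-(n_classes - 1):]       # the C-1 discriminant eigenvalues
    m = m or (n_classes - 1)
    return -np.mean(top[:m])             # focus on the weakest directions
```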
Deep linear discriminant analysis on fisher networks: A hybrid architecture for person re-identification
Person re-identification seeks a correct match for a person of interest across views among a large number of impostors. It typically involves two procedures: non-linear feature extraction that is robust to dramatic appearance changes, and subsequent discriminative analysis that reduces intra-personal variations while enlarging inter-personal differences. In this paper, we introduce a hybrid ...
Train Feedforward Neural Network with Layer-wise Adaptive Rate via Approximating Back-matching Propagation
Stochastic gradient descent (SGD) has achieved great success in training deep neural networks, where the gradient is computed through backpropagation. However, the back-propagated values of different layers vary dramatically. This inconsistency in gradient magnitude across layers makes optimizing a deep neural network with a single learning rate problematic. We introduce the back-...
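One common way to realize a layer-wise adaptive rate, shown here purely for illustration, is to rescale each layer's step by the ratio of its weight norm to its gradient norm (a LARS-style rule; this paper instead derives its rates by approximating back-matching propagation). All names and constants below are assumptions.

```python
import numpy as np

def layerwise_scaled_step(weights, grads, base_lr=0.1, trust=0.01, eps=1e-8):
    """One SGD step with a per-layer adaptive rate (LARS-style sketch).

    weights, grads : lists of per-layer arrays with matching shapes.
    Each layer's step is scaled by ||w|| / ||g||, so layers whose
    back-propagated gradients are tiny still move a proportionate amount.
    """
    for w, g in zip(weights, grads):
        local = trust * np.linalg.norm(w) / (np.linalg.norm(g) + eps)
        w -= base_lr * local * g      # in-place per-layer update
    return weights
```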
Handwritten Character Recognition using Modified Gradient Descent Technique of Neural Networks and Representation of Conjugate Descent for Training Patterns
The purpose of this study is to analyze the performance of the back-propagation algorithm with changing training patterns and a second momentum term in feed-forward neural networks. The analysis is conducted on 250 different words of three lowercase letters from the English alphabet. These words are presented to two vertical segmentation programs, which are designed in MATLAB and based on portions (1...
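One plausible reading of the update rule being varied, a backprop weight step with first and second momentum terms, is sketched below; the coefficient values are illustrative, not taken from the study.

```python
def momentum_update(w, grad, dw_prev, dw_prev2, lr=0.01, alpha=0.9, beta=0.3):
    """One weight update with two momentum terms (sketch):

        dw(t) = -lr * grad + alpha * dw(t-1) + beta * dw(t-2)

    Returns the new weights plus the two most recent deltas, so the
    caller can thread them through the training loop.
    """
    dw = -lr * grad + alpha * dw_prev + beta * dw_prev2
    return w + dw, dw, dw_prev
```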
Deep Cascade Learning
In this paper, we propose a novel approach for efficient, bottom-up training of deep neural networks using a layered structure. Our algorithm, which we refer to as Deep Cascade Learning, is motivated by the Cascade-Correlation approach of Fahlman [1], who introduced it in the context of perceptrons. We demonstrate our algorithm on networks of convolutional layers, though its applicab...
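A minimal sketch of such stage-wise, bottom-up training, assuming a PyTorch setting (the paper's exact schedule and output connections differ), might look like this: each stage trains one new block plus a throwaway linear head on features from the already-frozen prefix.

```python
import torch
import torch.nn as nn

def cascade_train(blocks, feat_dims, loader, n_classes, epochs=1, lr=1e-3):
    """Train a network one block at a time, bottom-up (illustrative sketch).

    blocks    : list of nn.Module stages, trained in order
    feat_dims : flattened feature size after each stage (assumed known)
    loader    : iterable of (inputs, labels) mini-batches
    """
    trained = []                                  # frozen prefix so far
    loss_fn = nn.CrossEntropyLoss()
    for block, dim in zip(blocks, feat_dims):
        head = nn.Linear(dim, n_classes)          # throwaway stage classifier
        opt = torch.optim.SGD(
            list(block.parameters()) + list(head.parameters()), lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                with torch.no_grad():             # earlier stages stay fixed
                    for m in trained:
                        x = m(x)
                logits = head(block(x).flatten(1))
                loss = loss_fn(logits, y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        for p in block.parameters():              # freeze the finished stage
            p.requires_grad_(False)
        trained.append(block)
    return nn.Sequential(*trained)
```

Because earlier stages never receive gradients again, memory and compute per stage stay flat as the network grows, which is the efficiency argument the cascade approach rests on.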
Journal:
Volume / Issue:
Pages: -
Publication date: 2015